-
→ everyone wants their spaced repetition algorithm to predict retention. But as a learner, I want one that makes me learn as fast as possible.
- that means measuring learning rate, and measuring cost per trial
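A learner-centered metric along these lines could be sketched like this (a toy, not any existing benchmark; the function name and log format are made up): count how many cards the learner currently retains, divided by total study time.

```python
# Toy learner-centred metric: knowledge retained per unit of study time,
# instead of accuracy of retention predictions. All numbers illustrative.

def learning_efficiency(reviews):
    """reviews: list of (card_id, seconds_spent, recalled: bool) in order."""
    total_time = sum(sec for _, sec, _ in reviews)
    # A card counts as "retained" if its most recent review was a success.
    latest = {}
    for card, _, recalled in reviews:
        latest[card] = recalled
    retained = sum(latest.values())
    return retained / total_time  # cards retained per second of study

log = [("a", 8, True), ("b", 12, False), ("b", 10, True), ("c", 5, True)]
print(learning_efficiency(log))  # 3 cards retained over 35 s of study
```

Unlike a prediction-accuracy score, this number improves when an intervention (mnemonic prompts, merging interfering cards) actually speeds up learning, even if it makes outcomes harder to predict.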
-
bad incentives:
- imagine you randomly prompt users to make up a mnemonic, and suppose the outcome is a coin flip: it either does nothing or gives 100% retention forever. A universal metric would punish you for this, right?
- a more specific example: if you have two similar cards in your deck, essentially causing interference, do you trigger a leech prompt that may make things better or worse?
- it also (kind of) incentivizes you to build NN algos fitted to existing/old data, instead of outdoing them
- funnily enough, Anki would probably "improve" if it didn't remove leeches from active learning (because predicting them is easy)
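The leech point can be made concrete with a toy calculation (the retention numbers are made up, purely illustrative): reviews of leeches are easy to predict, so keeping them in the evaluated pool pulls the average log loss down even though those reviews barely teach anything.

```python
import math

def log_loss(p, outcomes):
    # Mean negative log-likelihood of Bernoulli outcomes under prediction p.
    return -sum(math.log(p) if o else math.log(1 - p) for o in outcomes) / len(outcomes)

# Illustrative: normal cards recalled ~85% of the time, leeches only ~10%.
normal = [1] * 85 + [0] * 15   # 100 reviews of normal cards
leech  = [1] * 10 + [0] * 90   # 100 reviews of leech cards

loss_without_leeches = log_loss(0.85, normal)
loss_with_leeches = (log_loss(0.85, normal) + log_loss(0.10, leech)) / 2

print(round(loss_without_leeches, 3))  # ≈ 0.423
print(round(loss_with_leeches, 3))     # ≈ 0.374
```

A calibrated prediction on near-certain failures is an easy win for the metric, so an algorithm that keeps leeches in rotation scores better while the learner's time is spent on cards that rarely stick.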
-
an expression of "I'm just doing the algo, you do the user experience"
-
this whole thing neighbors: the standard button labels in spaced repetition are unscientific and likely terrible
SR algos are incentivized to predict retention, not to optimize retention
Backlinks (4)
- how to make the world a better place by building really good tools for learning?
- instead of classifying items by difficulty in SR, we should probably reduce difficulty
- problem with supermemo: if the universal metric is about retrievability prediction accuracy, there is no incentive for actual learning
- tracking stuff like 'trailing success rate' or 'streak correct' doesn't make sense if SR is counteracting